3 research outputs found

    An Error-Based Approximation Sensing Circuit for Event-Triggered, Low Power Wearable Sensors

    Event-based sensors have the potential to optimize energy consumption at every stage of the signal processing pipeline, including data acquisition, transmission, processing and storage. However, almost all state-of-the-art systems are still built on classical Nyquist-based periodic signal acquisition. In this work, we design and validate the Polygonal Approximation Sampler (PAS), a novel circuit that implements a general-purpose event-based sampler using a polygonal approximation algorithm as the underlying sampling trigger. The circuit can be dynamically reconfigured to produce either a coarse or a detailed reconstruction of the analog input by adjusting the error threshold of the approximation. The proposed circuit is designed at the Register Transfer Level and processes each input sample received from the ADC in a single clock cycle. The PAS has been tested with three archetypal types of signals captured by wearable devices (electrocardiogram, accelerometer and respiration data) and compared with a standard periodic ADC. These tests show that single-channel signals with slow variations and constant segments (such as the single-lead ECG and respiration signals used here) benefit greatly from this sampling technique, reducing the amount of data used by up to 99% without significant performance degradation. At the same time, multi-channel signals (such as the six-dimensional accelerometer signal) can still benefit from the designed circuit, achieving a reduction factor of up to 80% with minor performance degradation. These results open the door to new types of wearable sensors with reduced size and longer battery lifetime.
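    The sampling trigger described above can be sketched in software. The sketch below is an assumption about the general idea (keep a sample only when a straight line from the last kept point can no longer approximate the intervening samples within an error threshold), not the paper's RTL implementation, which processes one sample per clock cycle:

    ```python
    def polygonal_sample(signal, eps):
        """Event-based polygonal approximation (sketch).

        Keeps only the sample indices needed so that linear interpolation
        between kept points stays within `eps` of every original sample.
        A larger `eps` gives a coarser reconstruction and fewer events.
        """
        kept = [0]          # always keep the first sample
        anchor = 0          # index of the last kept sample
        for i in range(2, len(signal)):
            x0, y0 = anchor, signal[anchor]
            x1, y1 = i, signal[i]
            # Check whether the chord anchor->i still fits all points between.
            within = True
            for j in range(anchor + 1, i):
                y_hat = y0 + (y1 - y0) * (j - x0) / (x1 - x0)
                if abs(signal[j] - y_hat) > eps:
                    within = False
                    break
            if not within:
                # Error threshold exceeded: emit an event at the previous sample.
                kept.append(i - 1)
                anchor = i - 1
        kept.append(len(signal) - 1)  # always keep the last sample
        return kept
    ```

    On a constant or linear segment this keeps only the two endpoints, which illustrates why slowly varying signals with flat stretches (ECG baselines, respiration) see the largest data reduction.
    
    
    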

    E2CNN: Ensembles of Convolutional Neural Networks to Improve Robustness Against Memory Errors in Edge-Computing Devices

    To reduce energy consumption, it is possible to operate embedded systems at sub-nominal conditions (e.g., reduced voltage, limited eDRAM refresh rate) that can introduce bit errors in their memories. These errors can affect the stored values of CNN weights and activations, compromising accuracy. In this paper, we introduce Embedded Ensemble CNNs (E2CNNs), an architectural design methodology for conceiving ensembles of convolutional neural networks that improve robustness against memory errors compared to a single-instance network. Ensembles of CNNs have previously been proposed to increase accuracy, at the cost of replicating similar or different architectures. Unfortunately, state-of-the-art (SoA) ensembles are ill-suited to embedded systems, in which memory and processing constraints limit the number of deployable models. Our proposed architecture overcomes this limitation by applying SoA compression methods to produce an ensemble with the same memory requirements as the original architecture, but with improved error robustness. Then, as part of the E2CNNs design methodology, we propose a heuristic method to automate the design of the voter-based ensemble architecture that maximizes accuracy for the expected memory error rate while bounding the design effort. To evaluate the robustness of E2CNNs for different error types and densities, and their ability to achieve energy savings, we propose three error models that simulate the behavior of SRAM and eDRAM operating at sub-nominal conditions. Our results show that E2CNNs achieve energy savings of up to 80% for LeNet-5, 90% for AlexNet, 60% for GoogLeNet, 60% for MobileNet and 60% for an optimized industrial CNN, while minimizing the impact on accuracy. Furthermore, the memory size can be decreased by up to 54% by reducing the number of members in the ensemble, with a more limited impact on the original accuracy than is obtained through pruning alone.
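    The two mechanisms the abstract combines can be illustrated with a minimal sketch: a uniform bit-flip error model (an assumption standing in for the paper's three SRAM/eDRAM models) applied to quantized weights, and a per-sample majority voter over the predictions of the ensemble members. Names and signatures here are illustrative, not the paper's API:

    ```python
    import random
    from collections import Counter

    def inject_bit_errors(weights, ber, rng, bits=8):
        """Flip each bit of each quantized weight with probability `ber`.

        A simple uniform bit-error model for memory operating at
        sub-nominal conditions (assumed; the paper defines three models).
        """
        out = []
        for w in weights:
            for b in range(bits):
                if rng.random() < ber:
                    w ^= 1 << b  # flip bit b
            out.append(w)
        return out

    def majority_vote(member_predictions):
        """Per-sample majority vote over the class predictions of the
        ensemble members: errors corrupting one member are outvoted as
        long as a majority of members still predict correctly."""
        return [Counter(col).most_common(1)[0][0]
                for col in zip(*member_predictions)]
    ```

    The voter is what buys robustness: a bit error that changes one member's prediction on a sample is masked whenever the other members agree on the correct class.
    
    
    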

    Running Efficiently CNNs on the Edge Thanks to Hybrid SRAM-RRAM In-Memory Computing

    The increasing size of Convolutional Neural Networks (CNNs) and the high computational workload required for inference pose major challenges for their deployment on resource-constrained edge devices. In this paper, we address these challenges by proposing a novel In-Memory Computing (IMC) architecture. Our IMC strategy efficiently performs arithmetic operations based on bitline computing, enabling a high degree of parallelism while reducing energy-costly data transfers. Moreover, it features a hybrid memory structure, where a portion of each subarray, dedicated to storing CNN weights, is implemented as high-density, zero-standby-power Resistive RAM. Finally, it exploits an innovative method for storing quantized weights based on their value, named Weight Data Mapping (WDM), which further increases efficiency. Compared to state-of-the-art IMC alternatives, our solution provides up to 93% improvement in energy efficiency and up to 6x lower run-time when performing inference on the MobileNet and AlexNet neural networks.
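    The kind of arithmetic that bitline computing parallelizes can be sketched functionally: decompose each quantized weight into bit-planes, select activations per plane (the AND-like bitline operation), and accumulate each plane with its power-of-two significance. This is a behavioral sketch of a bit-serial dot product under assumed unsigned operands, not the paper's circuit or its WDM scheme:

    ```python
    def bitline_dot(activations, weights, bits=8):
        """Bit-serial dot product (behavioral sketch).

        For each bit-plane b, sum the activations whose paired weight has
        bit b set (the selection a bitline AND performs in parallel across
        a subarray), then shift the partial sum by the plane's significance.
        """
        acc = 0
        for b in range(bits):
            plane = sum(a for a, w in zip(activations, weights)
                        if (w >> b) & 1)   # bitline-style per-plane select
            acc += plane << b              # weight the plane by 2**b
        return acc
    ```

    In hardware the per-plane selection happens simultaneously on many bitlines, which is where the parallelism and the savings in data movement come from.
    
    
    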